Equivalence Results between Feedforward and Recurrent Neural Networks for Sequences
Author
Abstract
In the context of sequence processing, we study the relationship between single-layer feedforward neural networks, which have simultaneous access to all items composing a sequence, and single-layer recurrent neural networks, which access information one step at a time. We treat both linear and nonlinear networks, describing a constructive procedure, based on linear autoencoders for sequences, that, given a feedforward neural network, shows how to define a recurrent neural network implementing the same function in time. We give upper bounds on the number of hidden units required by the recurrent network as a function of features of the feedforward network. By separating the functional component from the memory component, the proposed procedure suggests new efficient learning procedures, as well as interpretation procedures, for recurrent neural networks.
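In the linear case, the intuition behind the construction can be sketched concretely. The following is a minimal illustrative example, not the paper's actual procedure: the names and the specific shift-based memory are assumptions. A linear feedforward net computes y = W·[x_T; …; x_1] from the whole sequence at once, while a linear recurrent net with a lossless memory (a linear autoencoder for sequences) can accumulate the items one step at a time and reproduce the same output.

```python
import numpy as np

# Illustrative sketch (not the paper's construction): a linear RNN whose
# state (A, B) implements a lossless "stack" memory reproduces a linear
# feedforward net that sees the whole sequence simultaneously.
rng = np.random.default_rng(0)
T, d, out = 4, 3, 2                  # sequence length, item size, output size
W = rng.normal(size=(out, T * d))    # feedforward weights over the flat sequence

n = T * d
A = np.zeros((n, n))
A[d:, :-d] = np.eye(n - d)           # shift previously stored items down
B = np.zeros((n, d))
B[:d, :] = np.eye(d)                 # write the newest item on top

xs = rng.normal(size=(T, d))

# Feedforward: simultaneous access to all items (newest first, matching
# the order in which the shift memory stores them).
y_ff = W @ np.concatenate(list(xs[::-1]))

# Recurrent: one item per step, linear state update, readout at the end.
h = np.zeros(n)
for x in xs:
    h = A @ h + B @ x
y_rnn = W @ h

assert np.allclose(y_ff, y_rnn)      # both networks implement the same map
```

Here the recurrent network needs T·d hidden units to store the sequence losslessly; the paper's contribution is precisely to bound how many hidden units are actually needed as a function of the feedforward network's features.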
Similar references
On the Equivalence Between Ordinary Neural Networks and Higher Order Neural Networks
In this chapter, we study the equivalence between multilayer feedforward neural networks, referred to as Ordinary Neural Networks (ONNs), which contain only summation (Sigma) activation units, and multilayer feedforward Higher Order Neural Networks (HONNs), which contain both Sigma and product (PI) activation units. Since they were introduced by Giles and Maxwell (1987), HONNs have been used in...
Decoding spatiotemporal spike sequences via the finite state automata dynamics of spiking neural networks
Temporally complex stimuli are encoded into spatiotemporal spike sequences of neurons in many sensory areas. Here, we describe how downstream neurons with dendritic bistable plateau potentials can be connected to decode such spike sequences. Driven by feedforward inputs from the sensory neurons and controlled by feedforward inhibition and lateral excitation, the neurons transit between UP and D...
Unsupervised Learning in Recurrent Neural Networks
While much work has been done on unsupervised learning in feedforward neural network architectures, its potential with (theoretically more powerful) recurrent networks and time-varying inputs has rarely been explored. Here we train Long Short-Term Memory (LSTM) recurrent networks to maximize two information-theoretic objectives for unsupervised learning: Binary Information Gain Optimization (BI...
Self-Organized-Expert Modular Network for Classification of Spatiotemporal Sequences
We investigate a form of modular neural network for classification with (a) pre-separated input vectors entering its specialist (expert) networks, (b) specialist networks which are self-organized (radial-basis-function or self-targeted feedforward type), and (c) a single-layer net that fuses (or integrates) the specialists. When the modular architecture is applied to spatiotemporal sequence...
Efficient Evolution of Asymmetric Recurrent Neural Networks Using a Two-dimensional Representation
Recurrent neural networks are particularly useful for processing time sequences and simulating dynamical systems. However, methods for building recurrent architectures have been hindered by the fact that available training algorithms are considerably more complex than those for feedforward networks. In this paper, we present a new method to build recurrent neural networks based on evolutionary ...